In the process of materials discovery, chemists currently need to perform many laborious, time-consuming, and often dangerous lab experiments. To accelerate this process, we propose a framework for robots to assist chemists by performing lab experiments autonomously. The solution allows a general-purpose robot to perform diverse chemistry experiments and efficiently make use of available lab tools. Our system can load high-level descriptions of chemistry experiments, perceive a dynamic workspace, and autonomously plan the required actions and motions to perform the given chemistry experiments with common tools found in the existing lab environment. Our architecture uses a modified PDDLStream solver for integrated task and constrained motion planning, which generates plans and motions that are guaranteed to be safe by preventing collisions and spillage. We present a modular framework that can scale to many different experiments, actions, and lab tools. In this work, we demonstrate the utility of our framework on three pouring skills and two foundational chemical experiments for materials synthesis: solubility and recrystallization. More experiments and updated evaluations can be found at https://ac-rad.github.io/arc-icra2023.
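A minimal sketch of the kind of spillage constraint such a planner might enforce during pouring; the tilt-angle threshold model and waypoint representation are illustrative assumptions, not the paper's actual constraint formulation:

```python
import math

# Hypothetical no-spill feasibility test: reject any trajectory waypoint whose
# container tilt exceeds a fill-level-dependent limit. The linear threshold
# model below is an illustrative assumption.

def max_safe_tilt(fill_fraction: float) -> float:
    """More headroom in the container allows a larger tilt before spilling."""
    return math.radians(90.0 * (1.0 - fill_fraction))

def trajectory_is_spill_free(tilt_angles, fill_fraction: float) -> bool:
    """tilt_angles: container tilt (radians) at each motion waypoint."""
    limit = max_safe_tilt(fill_fraction)
    return all(abs(a) <= limit for a in tilt_angles)

# A task-and-motion planner would run a test like this on each candidate
# motion before committing it to the plan.
if __name__ == "__main__":
    waypoints = [0.0, 0.2, 0.5, 0.9]   # tilt in radians along the motion
    print(trajectory_is_spill_free(waypoints, fill_fraction=0.3))
```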
We present a method for predicting the 3D structure, segmentation masks, and material properties of materials, liquids, and objects inside transparent vessels from a single image, without prior knowledge of the image source or camera parameters. Manipulating materials in transparent containers is essential in many fields and relies heavily on vision. This work supplies a new procedurally generated dataset consisting of 50k images of liquids and solid objects inside transparent containers. Image annotations include 3D models, material properties (color/transparency/roughness...), and segmentation masks of each vessel and its contents. The synthetic (CGI) part was generated using 13k different objects, 500 different environments (HDRI), and 1,450 material textures (PBR), combined with simulated liquids and procedurally generated vessels. In addition, we supply 104 real-world images of objects inside transparent vessels, with depth maps of the vessels and their contents. We propose a camera-agnostic method that predicts 3D models from an image as an XYZ map, which allows the trained network to predict the 3D model as a per-pixel map of XYZ coordinates without prior knowledge of the image source. To compute the training loss, we use distances between pairs of points inside the 3D model instead of absolute XYZ coordinates, which makes the loss function translation invariant. We use this to predict the 3D models of vessels and their contents from a single image. Finally, we demonstrate a network that uses a single image to predict the material properties of a vessel's contents and surface.
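A minimal PyTorch sketch of the translation-invariant loss described above, comparing distances between random point pairs rather than absolute XYZ coordinates; the pair-sampling scheme and tensor shapes are assumptions:

```python
import torch

def pairwise_distance_loss(pred_xyz, gt_xyz, mask, num_pairs=1024):
    """Translation-invariant 3D loss: compare distances between random point
    pairs instead of absolute XYZ coordinates.

    pred_xyz, gt_xyz: (H, W, 3) XYZ maps; mask: (H, W) bool of valid pixels.
    """
    pts_pred = pred_xyz[mask]            # (N, 3) predicted 3D points
    pts_gt = gt_xyz[mask]                # (N, 3) ground-truth 3D points
    n = pts_pred.shape[0]
    i = torch.randint(0, n, (num_pairs,))
    j = torch.randint(0, n, (num_pairs,))
    d_pred = (pts_pred[i] - pts_pred[j]).norm(dim=1)
    d_gt = (pts_gt[i] - pts_gt[j]).norm(dim=1)
    return (d_pred - d_gt).abs().mean()  # unchanged under global translation
```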
Unsupervised pixel-level defective region segmentation is an important task in image-based anomaly detection for various industrial applications. The state-of-the-art methods each have their own advantages and limitations: matrix-decomposition-based methods are robust to noise but lack the capability to model complex backgrounds; representation-based methods are good at localizing defective regions but lack accuracy in extracting their shape contours; reconstruction-based methods detect defective regions whose shape contours match the ground truth well but are noisy. To combine their strengths, we present an unsupervised patch autoencoder based deep image decomposition (PAEDID) method for defective region segmentation. In the training stage, we learn the common background as a deep image prior with a patch autoencoder (PAE) network. In the inference stage, we formulate anomaly detection as an image decomposition problem with the deep image prior and domain-specific regularizations. With the proposed approach, the defective regions in an image can be accurately extracted in an unsupervised fashion. We demonstrate the effectiveness of the PAEDID method in simulation studies and in a case study on an industrial dataset.
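An illustrative sketch of decomposition-based inference in this spirit: split the image X into a background B constrained to the autoencoder manifold and a sparse anomaly A. The L1 regularizer and optimizer settings below are assumptions; the paper's domain-specific regularizations may differ.

```python
import torch

def decompose(x, pae, steps=200, lam=0.1, lr=1e-2):
    """x: (1, C, H, W) image; pae: trained patch autoencoder (used frozen)."""
    for p in pae.parameters():              # freeze the trained autoencoder
        p.requires_grad_(False)
    a = torch.zeros_like(x, requires_grad=True)   # anomaly component A
    opt = torch.optim.Adam([a], lr=lr)
    for _ in range(steps):
        b = pae(x - a)                      # background B via deep image prior
        loss = (x - b - a).pow(2).mean() + lam * a.abs().mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return a.detach()                       # threshold |A| for the defect mask
```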
As the adoption of deep learning techniques in industrial applications grows in speed and scale, the successful deployment of deep learning models often hinges on the availability, volume, and quality of annotated data. In this paper, we address the problems of efficient data labeling and annotation verification in a human-in-the-loop setting. We show that recent advances in self-supervised visual representation learning can lead to tools and methods that benefit the curation and engineering of natural-image datasets, reducing annotation cost and increasing annotation quality. We propose a unified framework that leverages self-supervised semi-supervised learning, and use it to construct workflows for data labeling and annotation verification tasks. We demonstrate the effectiveness of our workflows against existing methods. On active learning tasks, our method achieves 97.0% top-1 accuracy on CIFAR10 with 0.1% annotated data and 83.9% top-1 accuracy on CIFAR100 with 10% annotated data. When learning with 50% noisy labels, our method achieves 97.4% top-1 accuracy on CIFAR10 and 85.5% top-1 accuracy on CIFAR100.
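One plausible step in such a labeling workflow, sketched under assumptions: rank unlabeled samples by the uncertainty of a classifier trained on self-supervised features and send the most ambiguous ones for annotation. The margin-based heuristic is illustrative, not the paper's actual selection rule.

```python
import numpy as np

def select_for_annotation(probs: np.ndarray, budget: int) -> np.ndarray:
    """probs: (N, K) class probabilities for N unlabeled samples.
    Returns indices of the `budget` most ambiguous samples."""
    top2 = np.sort(probs, axis=1)[:, -2:]   # two highest class scores per row
    margin = top2[:, 1] - top2[:, 0]        # small margin = uncertain sample
    return np.argsort(margin)[:budget]

# Toy usage with fake predictions over 10 classes.
probs = np.random.dirichlet(np.ones(10), size=1000)
print(select_for_annotation(probs, budget=50))
```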
In this paper, we focus on analyzing the thermal modality of tactile sensing for material recognition using a large materials database. Many factors affect thermal recognition performance, including sensor noise, the initial temperatures of the sensor and the object, the thermal effusivity of the materials, and the contact duration. To analyze the effect of these factors on thermal recognition, we used a semi-infinite-solid heat-transfer model to simulate thermal data for all materials in the CES Edupack Level-1 database. We used support vector machines (SVMs) to predict F1 scores for binary material recognition over 2,346 material pairs. We also collected data with a real robot equipped with a thermal sensor and analyzed its material recognition performance on 66 real-world material pairs. Additionally, we analyzed the performance when training the model on simulated data and testing it on real robot data. Our model predicted material recognition performance with a 0.980 F1 score on simulated data, a 0.994 F1 score on real-world data with a constant initial sensor temperature, a 0.966 F1 score on real-world data with varying initial sensor temperatures, and a 0.815 F1 score for sim-to-real transfer. Finally, we provide guidelines on sensor design and parameter selection based on the insights gained from these results. We release our simulated and real-robot datasets for further use by the robotics community.
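For intuition, the standard semi-infinite-solid contact model predicts that when two bodies touch, the interface temperature settles near a weighted average governed by their thermal effusivities e = sqrt(k * rho * c_p); a sketch with placeholder material values (not database entries) follows:

```python
import math

def effusivity(k: float, rho: float, c_p: float) -> float:
    """k: conductivity (W/m.K), rho: density (kg/m^3), c_p: heat capacity (J/kg.K)."""
    return math.sqrt(k * rho * c_p)

def contact_temperature(t_sensor, e_sensor, t_object, e_object):
    """Interface temperature of two semi-infinite solids brought into contact."""
    return (e_sensor * t_sensor + e_object * t_object) / (e_sensor + e_object)

# Placeholder values: a warm skin-like sensor touching a room-temperature metal.
e_sensor = effusivity(k=0.3, rho=1000.0, c_p=3500.0)
e_metal = effusivity(k=200.0, rho=2700.0, c_p=900.0)
print(contact_temperature(34.0, e_sensor, 22.0, e_metal))  # drops toward 22 C
```

High-effusivity materials such as metals pull the interface temperature strongly toward their own, which is what makes the thermal modality informative for recognition.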
Few-Shot Instance Segmentation (FSIS) requires models to detect and segment novel classes from only a few support examples. In this work, we explore a simple yet unified solution for FSIS and its incremental variants, and introduce a new framework named Reference Twice (RefT) to fully exploit the relationship between support and query features in a Transformer-like framework. Our key insights are twofold: first, with the aid of support masks, we can generate dynamic class centers that more appropriately re-weight query features; second, we find that support object queries have already encoded key factors after base training. In this way, the query features can be enhanced twice, at the feature level and at the instance level. In particular, we first design a mask-based dynamic weighting module to enhance support features, and then link object queries for better calibration via cross-attention. After these steps, performance on novel classes improves significantly over our strong baseline. Additionally, our framework can be extended to incremental FSIS with minor modifications. Benchmarked on the COCO dataset under the FSIS, gFSIS, and iFSIS settings, our method achieves competitive performance across different shot counts, e.g., boosting nAP by a noticeable +8.2/+9.4 over the current state-of-the-art FSIS method for 10/30-shot. We further demonstrate the superiority of our approach on Few-Shot Object Detection. Code and models will be available.
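A minimal sketch of the mask-based class-center idea: pool support features under the support mask, then re-weight query features by similarity to the resulting center. The shapes and cosine re-weighting are assumptions for illustration, not RefT's exact module.

```python
import torch

def dynamic_class_center(sup_feat, sup_mask, eps=1e-6):
    """sup_feat: (C, H, W) support features; sup_mask: (H, W) binary mask.
    Masked average pooling yields a (C,) class center."""
    w = sup_mask.float()
    return (sup_feat * w).sum(dim=(1, 2)) / (w.sum() + eps)

def reweight_query(query_feat, center):
    """query_feat: (C, H, W). Scale each location by cosine similarity
    to the class center."""
    c, h, w = query_feat.shape
    q = query_feat.view(c, -1)                                # (C, H*W)
    sim = torch.cosine_similarity(q, center[:, None], dim=0)  # (H*W,)
    return (q * sim).view(c, h, w)
```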
In this chapter, we review and discuss the transformation that AI technology brings to HCI/UX work and assess how it will change the way this work is done. We first discuss how AI can be used to enhance the results of user research and design evaluation. We then discuss how AI technology can be used to enhance HCI/UX design. Finally, we discuss how AI-enabled capabilities can improve UX when users interact with computing systems, applications, and services.
As one of the most important psychological stress reactions, micro-expressions (MEs) are spontaneous and transient facial expressions that can reveal the genuine emotions of human beings. Recognizing MEs automatically (MER) is thus becoming increasingly crucial in the field of affective computing, and provides essential technical support for lie detection, psychological analysis, and other areas. However, the lack of abundant ME data seriously restricts the development of cutting-edge data-driven MER models. Despite recent efforts to alleviate this problem with several spontaneous ME datasets, the available data remains scarce. To address this ME data scarcity, we construct a dynamic spontaneous ME dataset with the largest ME data scale to date, called DFME (Dynamic Facial Micro-expressions), which includes 7,526 well-labeled ME videos elicited from 671 participants and annotated by more than 20 annotators over three years. We then apply four classical spatiotemporal feature learning models to DFME to perform MER experiments that objectively verify the validity of the dataset. In addition, we explore different solutions to the class imbalance and key-frame sequence sampling problems in dynamic MER on DFME, so as to provide a valuable reference for future research. The comprehensive experimental results show that our DFME dataset can facilitate research on automatic MER and provides a new benchmark for the task. DFME will be published via https://mea-lab-421.github.io.
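One common remedy for the class imbalance noted above is inverse-frequency weighted sampling; this generic PyTorch sketch stands in for DFME's actual protocol, which the abstract does not specify.

```python
import torch
from torch.utils.data import WeightedRandomSampler

labels = torch.tensor([0, 0, 0, 0, 1, 1, 2])   # toy ME class labels
counts = torch.bincount(labels).float()
weights = (1.0 / counts)[labels]                # rare classes drawn more often
sampler = WeightedRandomSampler(weights, num_samples=len(labels),
                                replacement=True)
# Pass `sampler` to a DataLoader so each batch is roughly class-balanced.
```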
Face Anti-spoofing (FAS) is essential to secure face recognition systems against various physical attacks. However, recent research generally focuses on short-distance applications (i.e., phone unlocking) while giving little consideration to long-distance scenes (i.e., surveillance security checks). To promote relevant research and fill this gap in the community, we collect a large-scale Surveillance High-Fidelity Mask (SuHiFiMask) dataset captured under 40 surveillance scenes, covering 101 subjects from different age groups with 232 3D attacks (high-fidelity masks), 200 2D attacks (posters, portraits, and screens), and 2 adversarial attacks. In this setting, low image resolution and noise interference are new challenges for surveillance FAS. Together with the SuHiFiMask dataset, we propose a Contrastive Quality-Invariance Learning (CQIL) network to alleviate the performance degradation caused by image quality, from three aspects: (1) an Image Quality Variable module (IQV) is introduced to recover discriminative image information by incorporating a super-resolution network; (2) generated sample pairs simulate quality-variance distributions, helping the contrastive learning strategy obtain feature representations that are robust to quality variation; (3) a Separate Quality Network (SQN) is designed to learn discriminative features independent of image quality. Finally, extensive experiments verify the quality of the SuHiFiMask dataset and the superiority of the proposed CQIL.
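A sketch of a quality-invariant contrastive step in this spirit: embed a face crop and a synthetically degraded copy, and pull each pair together while pushing apart other samples in the batch. The degradation model and InfoNCE-style loss are assumptions; the actual CQIL loss may differ.

```python
import torch
import torch.nn.functional as F

def degrade(x, factor=4):
    """Simulate low surveillance resolution by down- then up-sampling.
    x: (B, C, H, W) image batch."""
    h, w = x.shape[-2:]
    small = F.interpolate(x, size=(h // factor, w // factor),
                          mode="bilinear", align_corners=False)
    return F.interpolate(small, size=(h, w),
                         mode="bilinear", align_corners=False)

def quality_contrastive_loss(encoder, x, temperature=0.1):
    """encoder: maps (B, C, H, W) -> (B, D) embeddings (assumed)."""
    z1 = F.normalize(encoder(x), dim=1)            # clean views
    z2 = F.normalize(encoder(degrade(x)), dim=1)   # degraded views
    logits = z1 @ z2.t() / temperature             # (B, B) similarities
    targets = torch.arange(x.shape[0])             # positives on the diagonal
    return F.cross_entropy(logits, targets)
```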
When LiDAR semantic segmentation models are used in safety-critical applications such as autonomous driving, it is essential to understand and improve their robustness to a wide range of LiDAR corruptions. In this paper, we comprehensively analyze the robustness of LiDAR semantic segmentation models under various corruptions. To rigorously evaluate the robustness and generalizability of current approaches, we propose a new benchmark called SemanticKITTI-C, which features 16 out-of-domain LiDAR corruptions in three groups: adverse weather, measurement noise, and cross-device discrepancy. We then systematically investigate 11 LiDAR semantic segmentation models spanning different input representations (e.g., point clouds, voxels, projected images), network architectures, and training schemes. From this study we draw two insights: 1) the input representation plays a crucial role in robustness, and under specific corruptions, different representations behave very differently; 2) although state-of-the-art LiDAR semantic segmentation methods achieve promising results on clean data, they are less robust to noisy data. Finally, based on these observations, we design a robust LiDAR segmentation model (RLSeg) that greatly boosts robustness with simple but effective modifications. We hope that our benchmark, comprehensive analysis, and observations can advance future research in robust LiDAR semantic segmentation for safety-critical applications.
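A generic sketch of a corruption-robustness evaluation loop in the spirit of such a benchmark: score a model on each corruption and report the drop versus clean data. The corruption names and the mIoU routine are placeholder assumptions standing in for the benchmark's actual tooling.

```python
# Illustrative subset of corruption types; not SemanticKITTI-C's exact list.
CORRUPTIONS = ["fog", "snow", "gaussian_noise", "beam_dropout", "cross_device"]

def evaluate_robustness(model, clean_loader, corrupt_loader_fn, miou_fn):
    """corrupt_loader_fn(name) yields a corrupted test loader (assumed);
    miou_fn(model, loader) returns mean IoU (assumed)."""
    clean_miou = miou_fn(model, clean_loader)
    report = {}
    for name in CORRUPTIONS:
        miou = miou_fn(model, corrupt_loader_fn(name))
        report[name] = clean_miou - miou   # robustness gap per corruption
    return clean_miou, report
```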